A modification of the steepest descent method for solving large-scale unconstrained optimization problems
Authors
Abstract
Similar resources
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
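The excerpt does not give the double-parameter scaled formula itself, so the sketch below uses an ordinary BFGS update as a stand-in to illustrate the two properties the abstract names: the updated matrix stays positive definite and satisfies the standard secant relation B₊s = y. All names and the test problem are illustrative.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B.

    Stands in for the paper's double-parameter scaled formula (not
    given in this excerpt); like that formula, it preserves positive
    definiteness when s @ y > 0 and satisfies B_new @ s = y.
    """
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)
            + np.outer(y, y) / (s @ y))

# Illustrative check on a random convex quadratic f(x) = 0.5 x^T A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)
x0, x1 = rng.standard_normal(4), rng.standard_normal(4)
s, y = x1 - x0, A @ (x1 - x0)              # y = grad f(x1) - grad f(x0)
B1 = bfgs_update(np.eye(4), s, y)
assert np.allclose(B1 @ s, y)              # secant relation holds
assert np.all(np.linalg.eigvalsh(B1) > 0)  # positive definite
```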
A Modified Algorithm of Steepest Descent Method for Solving Unconstrained Nonlinear Optimization Problems
The steepest descent method (SDM), which can be traced back to Cauchy (1847), is the simplest gradient method for unconstrained optimization problems. The SDM is effective for well-posed and low-dimensional nonlinear optimization problems without constraints; however, for large-dimensional systems it converges very slowly. Therefore, a modified steepest descent method (MSDM) is developed to dea...
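For context, the classic SDM iterates x_{k+1} = x_k − α_k ∇f(x_k). The minimal sketch below uses Armijo backtracking for α_k (a standard choice, not the MSDM modification described in the abstract) and exercises it on an ill-conditioned quadratic, the kind of problem on which plain SDM is slow.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10_000):
    """Classic steepest descent with Armijo backtracking line search.

    A baseline sketch of the method the abstract modifies; the MSDM
    variant itself is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x - alpha * g) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
        x = x - alpha * g
    return x

# Example: an ill-conditioned quadratic, where plain SDM converges slowly.
Q = np.diag([1.0, 100.0])
xmin = steepest_descent(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x, [1.0, 1.0])
```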
Steepest descent method for solving zero-one nonlinear programming problems
In this paper we use the steepest descent method to solve zero-one nonlinear programming problems. Using a penalty function, we transform the problem into an unconstrained optimization problem, and then the steepest descent method yields the optimal solution of the original problem.
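The abstract does not specify the penalty function, so the sketch below assumes one common choice for zero-one constraints: adding μ·Σ(xᵢ(1−xᵢ))², which vanishes exactly at 0/1 points. The objective and all constants are hypothetical.

```python
import numpy as np

# Hypothetical zero-one problem: minimize f(x) = (x1 - 0.8)^2 + (x2 - 0.1)^2
# with x1, x2 restricted to {0, 1}. The paper's exact penalty is not given in
# this excerpt; a common choice adds mu * sum((x_i * (1 - x_i))^2), which is
# zero precisely at 0/1 points.
mu = 50.0
pen = lambda x: (x[0] - 0.8) ** 2 + (x[1] - 0.1) ** 2 + mu * np.sum((x * (1 - x)) ** 2)
grad = lambda x: (np.array([2 * (x[0] - 0.8), 2 * (x[1] - 0.1)])
                  + mu * 2 * x * (1 - x) * (1 - 2 * x))  # gradient of pen

x = np.array([0.5, 0.5])
for _ in range(5_000):       # plain steepest descent on the penalized problem
    x -= 0.01 * grad(x)
print(np.round(x))           # settles near the feasible point (1, 0)
```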
A New Steepest Descent Differential Inclusion-Based Method for Solving General Nonsmooth Convex Optimization Problems
In this paper, we investigate a steepest descent neural network for solving general nonsmooth convex optimization problems. Convergence to the optimal solution set is proved analytically. We apply the method to several numerical tests, which confirm the theoretical results and the performance of the proposed neural network.
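For a nonsmooth convex f, the steepest descent differential inclusion drives the state by ẋ(t) ∈ −∂f(x(t)); for smooth f this reduces to ordinary gradient flow. The sketch below is only a forward-Euler discretization of that flow under assumed names, not the paper's continuous-time neural network.

```python
import numpy as np

def subgradient_flow(subgrad, x0, dt=1e-2, steps=20_000):
    """Forward-Euler discretization of dx/dt in -subdifferential(f(x)).

    An illustrative explicit scheme only; the paper analyzes the
    continuous-time neural network, not this discretization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * subgrad(x)
    return x

# Nonsmooth convex example: f(x) = ||x||_1; one valid subgradient is sign(x).
x_star = subgradient_flow(np.sign, [0.7, -0.3])  # tends toward the minimizer 0
```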
On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
It is shown that steepest descent and Newton's method for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε⁻²) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε⁻²) evaluations known for steepest descent is tight, and that Newton's method may be as slow a...
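The ε⁻² upper bound can be recovered from the standard descent lemma: for f with L-Lipschitz gradient, a steepest descent step of length 1/L decreases f by at least ‖∇f(x_k)‖²/(2L). A sketch of that standard argument (not specific to this paper):

```latex
% Descent lemma: with step 1/L each iteration decreases f by at least
% \|\nabla f(x_k)\|^2 / (2L), so after K steps
\[
  \min_{0 \le k < K} \|\nabla f(x_k)\|^2
  \;\le\; \frac{1}{K}\sum_{k=0}^{K-1} \|\nabla f(x_k)\|^2
  \;\le\; \frac{2L\bigl(f(x_0) - f_{\mathrm{low}}\bigr)}{K},
\]
so guaranteeing $\|\nabla f(x_k)\| \le \epsilon$ for some $k$ requires at most
$K = \mathcal{O}(\epsilon^{-2})$ iterations; this is the bound shown to be tight.
```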
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
Journal
Journal title: International Journal of Engineering & Technology
Year: 2018
ISSN: 2227-524X
DOI: 10.14419/ijet.v7i3.28.20969